An implementation of Deflate in Coq
The widely used compression format "Deflate" is defined in RFC 1951 and is based on prefix-free codings and backreferences. There are unclear points about the way these codings are specified, and several sources of confusion in the standard. We address this problem by giving a rigorous mathematical specification, which we formalized in Coq. We produced a verified implementation in Coq which achieves competitive performance on inputs of several megabytes. In this paper we present the main parts of our implementation: a fully verified implementation of canonical prefix-free codings, which can be used in other compression formats as well, and an elegant formalism for specifying sophisticated formats, which we used to implement both a compression and a decompression algorithm in Coq that we formally prove inverse to each other -- the first time this has been achieved to our knowledge. Compatibility with other Deflate implementations is shown empirically. We furthermore discuss some of the difficulties, specifically regarding memory and runtime requirements, and our approaches to overcoming them.
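The canonical prefix-free codings mentioned above follow the construction described in RFC 1951, section 3.2.2. A minimal Python sketch of that construction (illustrative only; the paper's actual implementation is a verified Coq development, not this code):

```python
def canonical_codes(lengths):
    """Assign canonical prefix-free codes from a list of code lengths,
    following the algorithm in RFC 1951, section 3.2.2."""
    max_len = max(lengths)
    # Count how many codes exist of each bit length.
    bl_count = [0] * (max_len + 1)
    for l in lengths:
        if l > 0:
            bl_count[l] += 1
    # Compute the smallest code value for each length.
    code = 0
    next_code = [0] * (max_len + 1)
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code
    # Assign codes to symbols in increasing symbol order.
    codes = {}
    for sym, l in enumerate(lengths):
        if l > 0:
            codes[sym] = format(next_code[l], f"0{l}b")
            next_code[l] += 1
    return codes
```

On the example from RFC 1951 (lengths 3, 3, 3, 3, 3, 2, 4, 4 for symbols A..H), this assigns F the code 00, A through E the codes 010 through 110, and G and H the codes 1110 and 1111.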
Certified compilation for cryptography: Extended x86 instructions and constant-time verification
We present a new tool for the generation and verification of high-assurance high-speed machine-level cryptography implementations: a certified C compiler supporting instruction extensions to the x86. We demonstrate the practical applicability of our tool by incorporating it into SUPERCOP: a toolkit for measuring the performance of cryptographic software, which includes over 2000 different implementations. We show (i) that the coverage of x86 implementations in SUPERCOP increases significantly due to the added support of instruction extensions via intrinsics, and (ii) that the obtained verifiably correct implementations are much closer in performance to unverified ones. We extend our compiler with a specialized type system that acts at the pre-assembly level; this is the first constant-time verifier that can deal with extended instruction sets. We confirm that, by using instruction extensions, the performance penalty for verifiably constant-time code can be greatly reduced. This work is financed by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within the project PTDC/CCI-INF/31698/2017, and by the Norte Portugal Regional Operational Programme (NORTE 2020) under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and also by national funds through the FCT, within project NORTE-01-0145-FEDER-028550 (REASSURE).
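The constant-time discipline such a verifier enforces forbids branches and memory addresses that depend on secret data. A hedged Python sketch of the two basic branchless patterns, masked selection and conditional swap (illustrative only; the paper's verifier works on pre-assembly code, and the function names here are ours):

```python
def ct_select(bit, a, b, width=32):
    """Return a if bit == 1, else b, with no secret-dependent branch.
    bit must be 0 or 1; a and b are width-bit unsigned integers."""
    mask = (-bit) & ((1 << width) - 1)   # all ones if bit == 1, else 0
    return (a & mask) | (b & (mask ^ ((1 << width) - 1)))

def ct_cswap(bit, a, b, width=32):
    """Swap a and b when bit == 1, branchlessly (as used, e.g., in
    Montgomery-ladder scalar multiplication)."""
    mask = (-bit) & ((1 << width) - 1)
    t = (a ^ b) & mask
    return a ^ t, b ^ t
```

Both functions execute the same instruction sequence regardless of the secret `bit`, which is the property a constant-time type system certifies.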
Functional Big-step Semantics
When doing an interactive proof about a piece of software, it is important that the underlying programming language's semantics does not make the proof unnecessarily difficult or unwieldy. Both small-step and big-step semantics are commonly used, and the latter is typically given by an inductively defined relation. In this paper, we consider an alternative: using a recursive function akin to an interpreter for the language. The advantages include a better induction theorem, less duplication, accessibility to ordinary functional programmers, and the ease of doing symbolic simulation in proofs via rewriting. We believe that this style of semantics is well suited for compiler verification, including proofs of divergence preservation. We do not claim the invention of this style of semantics: our contribution here is to clarify its value, and to explain how it supports several language features that might appear to require a relational or small-step approach. We illustrate the technique on a simple imperative language with C-like for-loops and a break statement, and compare it to a variety of other approaches. We also provide ML- and lambda-calculus-based examples to illustrate its generality.
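The functional style can be sketched with a clock ("fuel") argument that makes the evaluation function total: a diverging loop eventually exhausts its fuel and returns a timeout instead of failing to terminate. A minimal Python illustration (the paper works in higher-order logic; the AST encoding and result tags here are ours):

```python
def eval_exp(e, env):
    """Evaluate an expression: ("lit", n), ("var", x), or ("add", e1, e2)."""
    if e[0] == "lit":
        return e[1]
    if e[0] == "var":
        return env[e[1]]
    if e[0] == "add":
        return eval_exp(e[1], env) + eval_exp(e[2], env)
    raise ValueError(e[0])

def run(stmt, env, fuel):
    """Big-step evaluation as a total function: returns ("ok", env) or
    ("timeout",). Fuel decreases on each loop unrolling, so even a
    diverging while-loop yields a defined result."""
    if stmt[0] == "skip":
        return ("ok", env)
    if stmt[0] == "assign":
        env2 = dict(env)
        env2[stmt[1]] = eval_exp(stmt[2], env)
        return ("ok", env2)
    if stmt[0] == "seq":
        r = run(stmt[1], env, fuel)
        if r[0] != "ok":
            return r
        return run(stmt[2], r[1], fuel)
    if stmt[0] == "while":
        if eval_exp(stmt[1], env) == 0:
            return ("ok", env)
        if fuel == 0:
            return ("timeout",)
        r = run(stmt[2], env, fuel)
        if r[0] != "ok":
            return r
        return run(stmt, r[1], fuel - 1)
    raise ValueError(stmt[0])
```

A countdown loop terminates with enough fuel and reports a timeout otherwise; divergence preservation is then phrased as "the compiled program times out exactly when the source does, for every fuel value".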
On Models and Code: A Unified Approach to Support Large-Scale Deductive Program Verification
Despite the substantial progress in the area of deductive program verification over recent years, it still remains a challenge to use deductive verification on large-scale industrial applications. In this abstract, I analyse why this is the case, and I argue that in order to solve this, we need to soften the border between models and code. This has two important advantages: (1) it would make it easier to reason about the high-level behaviour of programs using deductive verification, and (2) it would allow reasoning about incomplete applications during the development process. I discuss how the first steps towards this goal are supported by verification techniques within the VerCors project, and I sketch the future steps that are necessary to realise this goal.
A formally verified compiler back-end
This article describes the development and formal verification (proof of
semantic preservation) of a compiler back-end from Cminor (a simple imperative
intermediate language) to PowerPC assembly code, using the Coq proof assistant
both for programming the compiler and for proving its correctness. Such a
verified compiler is useful in the context of formal methods applied to the
certification of critical software: the verification of the compiler guarantees
that the safety properties proved on the source code hold for the executable
compiled code as well.
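The shape of such a semantic-preservation statement can be illustrated in miniature: compile arithmetic expressions to a stack machine and check that running the target code agrees with evaluating the source. This toy Python sketch only tests the property on examples; CompCert proves it in Coq for all programs:

```python
def eval_src(e):
    """Source semantics: evaluate ("lit", n) / ("add", e1, e2) expressions."""
    if e[0] == "lit":
        return e[1]
    if e[0] == "add":
        return eval_src(e[1]) + eval_src(e[2])
    raise ValueError(e[0])

def compile_exp(e):
    """Compile an expression to stack-machine code."""
    if e[0] == "lit":
        return [("push", e[1])]
    if e[0] == "add":
        return compile_exp(e[1]) + compile_exp(e[2]) + [("add",)]
    raise ValueError(e[0])

def run_machine(code):
    """Target semantics: execute stack-machine instructions."""
    stack = []
    for ins in code:
        if ins[0] == "push":
            stack.append(ins[1])
        elif ins[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

# Semantic preservation, stated for this toy:
# for every expression e, run_machine(compile_exp(e)) == [eval_src(e)].
```

A verified compiler turns the comment at the end into a machine-checked theorem, so properties proved of the source transfer to the generated code.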
Handling polymorphic algebraic effects
Algebraic effects and handlers are a powerful abstraction mechanism to
represent and implement control effects. In this work, we study their extension
with parametric polymorphism that allows abstracting not only expressions but
also effects and handlers. Although polymorphism makes it possible to reuse and
reason about effect implementations more effectively, it has long been known
that a naive combination of polymorphic effects and let-polymorphism breaks
type safety. Although type safety can often be gained by restricting let-bound
expressions---e.g., by adopting value restriction or weak polymorphism---we
propose a complementary approach that restricts handlers instead of let-bound
expressions. Our key observation is that, informally speaking, a handler is
safe if resumptions from the handler do not interfere with each other. To
formalize our idea, we define a call-by-value lambda calculus that supports
let-polymorphism and polymorphic algebraic effects and handlers, design a type
system that rejects interfering handlers, and prove type safety of our
calculus. Comment: Added the errata for the ESOP'19 paper (page 28).
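The programming model of algebraic effects and handlers can be sketched in Python with generators, where `yield` performs an effect and `send` acts as the (single-shot) resumption. This is only an illustration of handlers and resumptions; the names `Effect` and `handle` are ours, and the sketch is untyped, so it cannot exhibit the interaction with let-polymorphism that the paper studies:

```python
class Effect:
    """An effect operation: a name plus its arguments."""
    def __init__(self, name, *args):
        self.name, self.args = name, args

def handle(gen, handlers):
    """Run a generator, interpreting each yielded Effect with the matching
    handler; sending the handler's result back into the generator plays
    the role of the resumption."""
    try:
        eff = next(gen)
        while True:
            eff = gen.send(handlers[eff.name](*eff.args))
    except StopIteration as stop:
        return stop.value

# Example: a state effect with get/put, handled by a mutable cell.
def counter():
    n = yield Effect("get")
    yield Effect("put", n + 1)
    m = yield Effect("get")
    return m

cell = {"v": 41}
result = handle(counter(), {
    "get": lambda: cell["v"],
    "put": lambda x: cell.__setitem__("v", x),
})
```

Because each `send` resumes the computation exactly once, these resumptions cannot interfere with each other, which is the informal safety condition the paper's type system enforces.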
The state of the art in the analysis of two-dimensional gel electrophoresis images
Software-based image analysis is a crucial step in the biological interpretation of two-dimensional gel electrophoresis experiments. Recent significant advances in image processing methods combined with powerful computing hardware have enabled the routine analysis of large experiments. We cover the process from the imaging of 2-D gels, through spot quantitation and the creation of expression profiles, to statistical expression analysis and the presentation of results. Challenges for analysis software as well as good practices are highlighted. We emphasize image warping and related methods that are able to overcome the difficulties that are due to varying migration positions of spots between gels. Spot detection, quantitation, normalization, and the creation of expression profiles are described in detail. The recent development of consensus spot patterns and complete expression profiles enables one to take full advantage of statistical methods for expression analysis that are well established for the analysis of DNA microarray experiments. We close with an overview of visualization and presentation methods (proteome maps) and current challenges in the field.
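As one concrete instance of the normalization step discussed above, a common approach expresses each spot volume as a fraction of the total spot volume of its gel, correcting for differences in protein load and staining between gels. A hedged sketch (the function name is ours; real analysis packages implement this and more elaborate schemes):

```python
def normalize_total_volume(spot_volumes):
    """Express each spot volume as a fraction of the gel's total spot
    volume, so profiles from differently loaded gels become comparable."""
    total = sum(spot_volumes.values())
    return {spot: vol / total for spot, vol in spot_volumes.items()}
```

After this step, expression profiles built from several gels can be compared spot by spot, which is a prerequisite for the statistical analyses described above.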
Mechanized semantics for the Clight subset of the C language
This article presents the formal semantics of a large subset of the C
language called Clight. Clight includes pointer arithmetic, "struct" and
"union" types, C loops and structured "switch" statements. Clight is the source
language of the CompCert verified compiler. The formal semantics of Clight is a
big-step operational semantics that observes both terminating and diverging
executions and produces traces of input/output events. The formal semantics of
Clight is mechanized using the Coq proof assistant. In addition to the
semantics of Clight, this article describes its integration in the CompCert
verified compiler and several ways by which the semantics was validated. Comment: Journal of Automated Reasoning (2009).
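The idea of a big-step semantics that produces traces of input/output events can be sketched by threading an event list through evaluation. A toy Python illustration (not Clight; the function `exec_stmt` and the AST encoding are ours):

```python
def exec_stmt(stmt, env, trace):
    """Big-step evaluation returning (env, trace); the "output" statement
    appends an observable event, mirroring a semantics whose judgments
    carry a trace of I/O events alongside the final state."""
    if stmt[0] == "skip":
        return env, trace
    if stmt[0] == "assign":                 # ("assign", var, value)
        env = dict(env)
        env[stmt[1]] = stmt[2]
        return env, trace
    if stmt[0] == "output":                 # ("output", var): emit an event
        return env, trace + [("out", env[stmt[1]])]
    if stmt[0] == "seq":                    # ("seq", s1, s2)
        env, trace = exec_stmt(stmt[1], env, trace)
        return exec_stmt(stmt[2], env, trace)
    raise ValueError(stmt[0])
```

Compiler correctness can then be stated in terms of these traces: the compiled program must produce the same trace of observable events as the source.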
Molecular and cellular mechanisms underlying the evolution of form and function in the amniote jaw.
The amniote jaw complex is a remarkable amalgamation of derivatives from distinct embryonic cell lineages. During development, the cells in these lineages experience concerted movements, migrations, and signaling interactions that take them from their initial origins to their final destinations and imbue their derivatives with aspects of form including their axial orientation, anatomical identity, size, and shape. Perturbations along the way can produce defects and disease, but also generate the variation necessary for jaw evolution and adaptation. We focus on molecular and cellular mechanisms that regulate form in the amniote jaw complex, and that enable structural and functional integration. Special emphasis is placed on the role of cranial neural crest mesenchyme (NCM) during the species-specific patterning of bone, cartilage, tendon, muscle, and other jaw tissues. We also address the effects of biomechanical forces during jaw development and discuss ways in which certain molecular and cellular responses add adaptive and evolutionary plasticity to jaw morphology. Overall, we highlight how variation in molecular and cellular programs can promote the phenomenal diversity and functional morphology achieved during amniote jaw evolution or lead to the range of jaw defects and disease that affect the human condition.
Program Verification in the Presence of I/O
Software verification tools that build machine-checked proofs of functional correctness usually focus on the algorithmic content of the code. Their proofs are not grounded in a formal semantic model of the environment that the program runs in, or the program's interaction with that environment. As a result, several layers of translation and wrapper code must be trusted. In contrast, the CakeML project focuses on end-to-end verification to replace this trusted code with verified code in a cost-effective manner. In this paper, we present infrastructure for developing and verifying impure functional programs with I/O and imperative file handling. Specifically, we extend CakeML with a low-level model of file I/O, and verify a high-level file I/O library in terms of the model. We use this library to develop and verify several Unix-style command-line utilities: cat, sort, grep, diff and patch. The workflow we present is built around the HOL4 theorem prover, and therefore all our results have machine-checked proofs.
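A low-level model of file I/O of the kind described here can be pictured as a file system represented by explicit state: a map from file names to contents plus a table of open descriptors. A hypothetical Python sketch (the class and method names are ours, not CakeML's), together with a `cat` written against it:

```python
class FS:
    """A file-system model as explicit state: filenames map to byte
    strings, and file descriptors map to (name, offset) pairs."""
    def __init__(self, files):
        self.files = dict(files)
        self.fds = {}
        self.next_fd = 3  # 0-2 reserved for stdio, as in POSIX

    def open(self, name):
        fd = self.next_fd
        self.next_fd += 1
        self.fds[fd] = (name, 0)
        return fd

    def read(self, fd, n):
        """Read up to n bytes and advance the offset; b"" signals EOF."""
        name, off = self.fds[fd]
        data = self.files[name][off:off + n]
        self.fds[fd] = (name, off + len(data))
        return data

def cat(fs, name):
    """Read a whole file through the model, chunk by chunk."""
    fd = fs.open(name)
    out = b""
    while True:
        chunk = fs.read(fd, 4)
        if not chunk:
            break
        out += chunk
    return out
```

Verifying a utility such as `cat` then amounts to proving that its result equals the file's contents in the model, for every model state, rather than trusting the behaviour of the real operating system.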